Information Structure in Mappings: An Approach to Learning, Representation, and Generalisation
Despite the remarkable success of large-scale neural networks, we still lack a unified notation for thinking about and describing their representational spaces. We lack methods to reliably describe how their representations are structured, how that structure emerges over training, and what kinds of structures are desirable. This thesis introduces quantitative methods for identifying systematic structure in a mapping between spaces, and leverages them to understand how deep-learning models learn to represent information, what representational structures drive generalisation, and how design decisions condition the structures that emerge. To do this I identify structural primitives present in a mapping, along with information-theoretic quantifications of each. These allow us to analyse learning, structure, and generalisation across multi-agent reinforcement learning models, sequence-to-sequence models trained on a single task, and Large Language Models. I also introduce a novel, performant approach to estimating the entropy of a vector space, which allows this analysis to be applied to models ranging in size from 1 million to 12 billion parameters. The experiments here shed light on how large-scale distributed models of cognition learn, while allowing us to draw parallels between those systems and their human analogues. They show how the structures of language, and the constraints that give rise to them, in many ways parallel the kinds of structures that drive the performance of contemporary neural networks.
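The thesis abstract above mentions a performant estimator for the entropy of a vector space without detailing it. As a point of reference only, here is a minimal sketch of the classical Kozachenko-Leonenko nearest-neighbour estimator, a standard baseline for this problem; the function name and the brute-force distance computation are illustrative choices, not the thesis's method.

```python
import numpy as np
from math import pi, lgamma

EULER_GAMMA = 0.5772156649015329

def kl_entropy(x: np.ndarray) -> float:
    """Kozachenko-Leonenko nearest-neighbour entropy estimate, in nats.

    x: (n, d) array of samples from an unknown continuous density.
    Uses psi(n) ~ log(n) and psi(1) = -EULER_GAMMA.
    """
    n, d = x.shape
    # Pairwise Euclidean distances; inflate the diagonal so a point
    # never counts itself as its own nearest neighbour.
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    eps = dist.min(axis=1)                              # nearest-neighbour radii
    log_vd = (d / 2) * np.log(pi) - lgamma(d / 2 + 1)   # log volume of unit d-ball
    return float(np.log(n) + EULER_GAMMA + log_vd + d * np.mean(np.log(eps)))
```

On a few thousand samples from a 1-D standard normal, this lands near the analytic entropy 0.5 * log(2 * pi * e) ≈ 1.42 nats; the O(n²) distance matrix is what a performant estimator would need to avoid at the scales the thesis describes.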
Philosophy of Cognitive Science in the Age of Deep Learning
Deep learning has enabled major advances across most areas of artificial intelligence research. This remarkable progress extends beyond mere engineering achievements and holds significant relevance for the philosophy of cognitive science. Deep neural networks have made substantial strides in overcoming the limitations of older connectionist models that once occupied the centre stage of philosophical debates about cognition, a development directly relevant to long-standing theoretical debates in the philosophy of cognitive science. Furthermore, ongoing methodological challenges related to the comparative evaluation of deep neural networks stand to benefit greatly from interdisciplinary collaboration with philosophy and cognitive science. The time is ripe for philosophers to explore foundational issues related to deep learning and cognition; this perspective paper surveys key areas where their contributions can be especially fruitful.
Exploring a Cognitive Architecture for Learning Arithmetic Equations
The acquisition and performance of arithmetic skills and basic operations such as addition, subtraction, multiplication, and division are essential for daily functioning, and reflect complex cognitive processes. This paper explores the cognitive mechanisms powering arithmetic learning, presenting a neurobiologically plausible cognitive architecture that simulates the acquisition of these skills. I implement a number vectorization embedding network and an associative memory model to investigate how an intelligent system can learn and recall arithmetic equations in a manner analogous to the human brain. I perform experiments that provide insights into the generalization capabilities of connectionist models, neurological causes of dyscalculia, and the influence of network architecture on cognitive performance. Through this interdisciplinary investigation, I aim to contribute to ongoing research into the neural correlates of mathematical cognition in intelligent systems.
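The abstract above pairs a number-embedding network with an associative memory that learns and recalls arithmetic equations. As an illustrative stand-in (not the paper's architecture), a minimal Hopfield-style associative memory shows the store-then-recall pattern such a system relies on, assuming equations are encoded as ±1 vectors:

```python
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weight matrix for a Hopfield-style associative memory.

    patterns: (p, n) array of +/-1 codes, one row per stored item.
    """
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)   # no self-connections
    return w

def recall(w: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterated sign updates settle a noisy probe onto a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1.0, -1.0)
    return s
```

Probing with a partially corrupted code (a "partial equation") and settling to the nearest stored pattern is the kind of content-addressable recall the abstract's associative memory is meant to capture.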
Minkowski-r Back-Propagation: Learning in Connectionist Models with Non-Euclidian Error Signals
Many connectionist learning models are implemented using gradient descent in a least squares error function of the output and teacher signal. This paper generalises back-propagation to Minkowski-r power metrics, of which least squares is the r = 2 special case: for small r a "city-block" error metric is approximated, and for large r the "maximum" or "supremum" metric is approached. An implementation of Minkowski-r back-propagation is described. Different r values may be appropriate for reducing the effects of outliers (noise).
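The error function and its gradient follow directly from the abstract's description; the function names are illustrative, and the gradient is the standard derivative of the Minkowski-r error, which replaces the usual (output − target) delta term at the output layer:

```python
import numpy as np

def minkowski_r_loss(y: np.ndarray, t: np.ndarray, r: float = 2.0) -> float:
    """Minkowski-r error: sum |y - t|^r.

    r = 2 is least squares; r -> 1 approaches the city-block metric,
    and large r approaches the supremum metric.
    """
    return float(np.sum(np.abs(y - t) ** r))

def minkowski_r_grad(y: np.ndarray, t: np.ndarray, r: float = 2.0) -> np.ndarray:
    """dE/dy = r * |y - t|^(r-1) * sign(y - t).

    For r = 2 this reduces to the familiar 2 * (y - t) delta;
    for r = 1 it is just sign(y - t), which damps the pull of outliers.
    """
    e = y - t
    return r * np.abs(e) ** (r - 1) * np.sign(e)
```

Dropping this gradient into an existing back-propagation loop in place of the squared-error delta is the whole change the paper's title describes.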
Direct memory access using two cues: Finding the intersection of sets in a connectionist model
For lack of alternative models, search and decision processes have provided the dominant paradigm for human memory access using two or more cues, despite evidence against search as an access process (Humphreys, Wiles & Bain, 1990). We present an alternative process to search, based on calculating the intersection of sets of targets activated by two or more cues. Two methods of computing the intersection are presented, one using information about the possible targets, the other constraining the cue-target strengths in the memory matrix. Analysis using orthogonal vectors to represent the cues and targets demonstrates the competence of both processes, and simulations using sparse distributed representations demonstrate the performance of the latter process for tasks involving 2 and 3 cues.
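The intersection process described above can be sketched in a few lines. This is a minimal illustration, assuming a linear matrix memory over orthogonal one-hot cue vectors and a graded elementwise minimum as the intersection; it is a simplification for exposition, not the paper's exact constrained-memory formulation:

```python
import numpy as np

def targets_for_cue(memory: np.ndarray, cue: np.ndarray) -> np.ndarray:
    """Activate the set of targets associated with a cue.

    memory: (n_targets, n_cues) matrix of cue-target strengths.
    Returns a graded activation over targets.
    """
    return memory @ cue

def intersect(*activations: np.ndarray) -> np.ndarray:
    """Approximate set intersection with an elementwise minimum (graded AND):
    a target survives only if every cue activates it."""
    out = activations[0]
    for a in activations[1:]:
        out = np.minimum(out, a)
    return out
```

With two one-hot cues each activating a set of targets, the minimum keeps exactly the targets activated by both, which is the direct-access alternative to serial search that the abstract argues for.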
ALCOVE: A Connectionist Model of Human Category Learning
ALCOVE is a connectionist model of human category learning that fits a broad spectrum of human learning data. Its architecture is based on well-established psychological theory, and is related to networks using radial basis functions. From the perspective of cognitive psychology, ALCOVE can be construed as a combination of exemplar-based representation and error-driven learning. From the perspective of connectionism, it can be seen as incorporating constraints into back-propagation networks appropriate for modelling human learning.
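The link to radial basis function networks mentioned above can be made concrete. As usually presented, each ALCOVE hidden node is centred on a stored exemplar and responds as a decaying function of the attentionally weighted city-block distance to the stimulus. A minimal sketch, assuming the common q = r = 1 parameterisation (names and the default specificity c are illustrative):

```python
import numpy as np

def alcove_hidden(x: np.ndarray, exemplars: np.ndarray,
                  attention: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Exemplar-node activations: exp(-c * sum_i alpha_i * |h_ji - x_i|).

    x:         (d,) stimulus vector.
    exemplars: (m, d) stored exemplars, one hidden node each.
    attention: (d,) learned attention weights alpha_i over dimensions.
    """
    dist = np.abs(exemplars - x) @ attention   # weighted city-block distance
    return np.exp(-c * dist)                   # RBF response per exemplar
```

An exemplar identical to the stimulus responds at 1.0, and the response falls off exponentially with distance; learning the attention weights is what lets the model stretch and shrink dimensions to fit human category-learning data.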
Analogy-- Watershed or Waterloo? Structural alignment and the development of connectionist models of analogy
Neural network models have been criticized for their inability to make use of compositional representations. In this paper, we describe a series of psychological phenomena that demonstrate the role of structured representations in cognition. These findings suggest that people compare relational representations via a process of structural alignment. This process will have to be captured by any model of cognition, symbolic or subsymbolic.
Connectionist Models for Auditory Scene Analysis
Although the visual and auditory systems share the same basic tasks of informing an organism about its environment, most connectionist work on hearing to date has been devoted to the very different problem of speech recognition. We believe that the most fundamental task of the auditory system is the analysis of acoustic signals into components corresponding to individual sound sources, which Bregman has called auditory scene analysis. Computational and connectionist work on auditory scene analysis is reviewed, and the outline of a general model that includes these approaches is described.
A Connectionist Model of the Owl's Sound Localization System
To address this problem we built a computational model of development in the owl's sound localization system. The structure of the model is drawn from known experimental data while the learning principles come from recent work in the field of brain style computation. The model accounts for numerous properties of the owl's sound localization system, makes specific and testable predictions for future experiments, and provides a theory of the developmental process.
Information Factorization in Connectionist Models of Perception
We examine a psychophysical law that describes the influence of stimulus and context on perception. It has been argued that this pattern of results is incompatible with feedback models of perception. In this paper we examine this claim using neural network models defined via stochastic differential equations. We show that the law is related to a condition named channel separability and has little to do with the existence of feedback connections. In essence, channels are separable if they converge into the response units without direct lateral connections to other channels and if their sensors are not directly contaminated by external inputs to the other channels. Implications of the analysis for cognitive and computational neuroscience are discussed.